chemical weapon
Scientists issue ominous warning over mind-altering 'brain weapons' that can control your perception, memory and behaviour
Mind control weapons may sound like something from a dystopian science fiction film, but experts now say they are becoming a reality.
- Asia > Russia (0.28)
- South America > Colombia (0.24)
- Asia > China (0.14)
- (16 more...)
- Personal (1.00)
- Research Report > New Finding (0.34)
LatentQA: Teaching LLMs to Decode Activations Into Natural Language
Pan, Alexander, Chen, Lijie, Steinhardt, Jacob
Interpretability methods seek to understand language model representations, yet the outputs of most such methods -- circuits, vectors, scalars -- are not immediately human-interpretable. In response, we introduce LatentQA, the task of answering open-ended questions about model activations in natural language. Towards solving LatentQA, we propose Latent Interpretation Tuning (LIT), which finetunes a decoder LLM on a dataset of activations and associated question-answer pairs, similar to how visual instruction tuning trains on question-answer pairs associated with images. We use the decoder for diverse reading applications, such as extracting relational knowledge from representations or uncovering system prompts governing model behavior. Our decoder also specifies a differentiable loss that we use to control models, such as debiasing models on stereotyped sentences and controlling the sentiment of generations. Finally, we extend LatentQA to reveal harmful model capabilities, such as generating recipes for bioweapons and code for hacking.
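The abstract notes that the decoder "specifies a differentiable loss that we use to control models." A minimal sketch of that idea, using a linear stand-in for the decoder (the real LIT decoder is a finetuned LLM; the dimensions, weights, and target answer here are all illustrative assumptions): steer an activation by gradient descent so the decoder reads out a desired answer.

```python
import numpy as np

# Toy LatentQA-style control. A frozen linear "decoder" maps a model
# activation to answer logits; we descend the decoder's cross-entropy
# loss with respect to the activation itself to steer the model state.
rng = np.random.default_rng(0)
d_act, n_answers = 16, 4                  # illustrative sizes
W = rng.normal(size=(n_answers, d_act))   # frozen stand-in decoder
activation = rng.normal(size=d_act)       # activation being steered
target = 2                                # desired decoder answer index

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def loss_and_grad(a):
    """Cross-entropy of the decoder's answer distribution against the
    target, and its gradient w.r.t. the activation (chain rule through W)."""
    p = softmax(W @ a)
    loss = -np.log(p[target])
    grad = W.T @ (p - np.eye(n_answers)[target])
    return loss, grad

# Gradient descent on the decoder-defined loss steers the activation;
# after enough steps the decoder's argmax answer should match `target`.
for _ in range(1000):
    loss, grad = loss_and_grad(activation)
    activation -= 0.05 * grad
```

In the paper this loss is backpropagated into the target model's parameters or inputs; here it only updates a raw activation vector, which is enough to show why differentiability of the decoder is the key property.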
- Europe > Austria > Vienna (0.14)
- North America > United States > Louisiana > Orleans Parish > New Orleans (0.04)
- Pacific Ocean > North Pacific Ocean > San Francisco Bay > Golden Gate (0.04)
- (2 more...)
- Information Technology > Security & Privacy (1.00)
- Energy (1.00)
- Government > Military (0.94)
- (4 more...)
Governing dual-use technologies: Case studies of international security agreements and lessons for AI governance
Wasil, Akash R., Barnett, Peter, Gerovitch, Michael, Hauksson, Roman, Reed, Tom, Miller, Jack William
International AI governance agreements and institutions may play an important role in reducing global security risks from advanced AI. To inform the design of such agreements and institutions, we conducted case studies of historical and contemporary international security agreements. We focused specifically on those arrangements around dual-use technologies, examining agreements in nuclear security, chemical weapons, biosecurity, and export controls. For each agreement, we examined four key areas: (a) purpose, (b) core powers, (c) governance structure, and (d) instances of non-compliance. From these case studies, we extracted lessons for the design of international AI agreements and governance institutions. We discuss the importance of robust verification methods, strategies for balancing power between nations, mechanisms for adapting to rapid technological change, approaches to managing trade-offs between transparency and security, incentives for participation, and effective enforcement mechanisms.
- Asia > Russia (1.00)
- Europe > Russia (0.32)
- Asia > Middle East > Iran (0.31)
- (15 more...)
- Overview (1.00)
- Research Report (0.82)
Russia-Ukraine war: List of key events, day 798
New drone footage obtained by The Associated Press news agency showed how months of relentless Russian artillery pounding has devastated Chasiv Yar. The town was once home to 12,000 people, but the footage reveals it is now almost deserted and barely a building remains intact. The United States accused Russia of breaking the international ban on chemical weapons by using the choking agent chloropicrin against Ukrainian troops. Chloropicrin is listed as a banned agent by the Hague-based Organisation for the Prohibition of Chemical Weapons (OPCW). The US said Moscow was also deploying riot control agents "as a method of warfare" in Ukraine.
- Asia > Russia (0.74)
- Europe > Ukraine (0.67)
- Europe > Russia > Central Federal District > Moscow Oblast > Moscow (0.38)
- (3 more...)
How (un)ethical are instruction-centric responses of LLMs? Unveiling the vulnerabilities of safety guardrails to harmful queries
Banerjee, Somnath, Layek, Sayan, Hazra, Rima, Mukherjee, Animesh
In this study, we tackle a growing concern around the safety and ethical use of large language models (LLMs). Despite their potential, these models can be tricked into producing harmful or unethical content through various sophisticated methods, including 'jailbreaking' techniques and targeted manipulation. Our work zeroes in on a specific issue: to what extent LLMs can be led astray by asking them to generate responses that are instruction-centric such as a pseudocode, a program or a software snippet as opposed to vanilla text. To investigate this question, we introduce TechHazardQA, a dataset containing complex queries which should be answered in both text and instruction-centric formats (e.g., pseudocodes), aimed at identifying triggers for unethical responses. We query a series of LLMs -- Llama-2-13b, Llama-2-7b, Mistral-V2 and Mistral 8X7B -- and ask them to generate both text and instruction-centric responses. For evaluation we report the harmfulness score metric as well as judgements from GPT-4 and humans. Overall, we observe that asking LLMs to produce instruction-centric responses enhances the unethical response generation by ~2-38% across the models. As an additional objective, we investigate the impact of model editing using the ROME technique, which further increases the propensity for generating undesirable content. In particular, asking edited LLMs to generate instruction-centric responses further increases the unethical response generation by ~3-16% across the different models.
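The reported "~2-38%" figure is a relative increase in the unethical-response rate when prompts request instruction-centric output instead of plain text. A sketch of that arithmetic, with invented placeholder counts (not TechHazardQA results):

```python
# Compare unethical-response rates for text vs. instruction-centric
# prompts per model, and express the change relative to the text baseline.

def unethical_rate(n_unethical, n_total):
    """Fraction of sampled responses judged unethical/harmful."""
    return n_unethical / n_total

def relative_increase(text_rate, instruction_rate):
    """Change in rate, as a percentage of the text-prompt baseline."""
    return 100.0 * (instruction_rate - text_rate) / text_rate

# Hypothetical per-model judgement counts: (unethical, total) pairs.
results = {
    "model-A": {"text": (50, 1000), "instruction": (62, 1000)},
    "model-B": {"text": (80, 1000), "instruction": (104, 1000)},
}

for model, r in results.items():
    t = unethical_rate(*r["text"])
    i = unethical_rate(*r["instruction"])
    print(f"{model}: {relative_increase(t, i):.1f}% relative increase")
```

Note this is a relative measure: a jump from a 5% to a 6.2% rate registers as a 24% increase, which is how small absolute shifts produce the large percentages quoted in the abstract.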
- Asia > Singapore (0.04)
- Asia > Middle East > Jordan (0.04)
- North America > Canada > Ontario > Toronto (0.04)
- (2 more...)
Are killer robots the future of war?
Humanity stands on the brink of a new era of warfare. Driven by rapid developments in artificial intelligence, weapons platforms that can identify, target and decide to kill human beings on their own -- without an officer directing an attack or a soldier pulling the trigger -- are fast transforming the future of conflict. Officially, they are called lethal autonomous weapons systems (LAWS), but critics call them killer robots. Many countries, including the United States, China, the United Kingdom, India, Iran, Israel, South Korea, Russia and Turkey, have invested heavily in developing such weapons in recent years. A United Nations report suggests that Turkish-made Kargu-2 drones in fully-automatic mode marked the dawn of this new age when they attacked combatants in Libya in 2020 amid that country's ongoing conflict. Autonomous drones have also played a crucial role in the war in Ukraine, where both Moscow and Kyiv have deployed these uncrewed weapons to target enemy soldiers and infrastructure.
- North America > United States (0.67)
- Europe > United Kingdom (0.49)
- Asia > China (0.26)
- (35 more...)
- Government > Military > Army (0.57)
- Government > Regional Government > North America Government (0.47)
Artificial Intelligence and Arms Control
Scharre, Paul, Lamberth, Megan
Potential advancements in artificial intelligence (AI) could have profound implications for how countries research and develop weapons systems, and how militaries deploy those systems on the battlefield. The idea of AI-enabled military systems has motivated some activists to call for restrictions or bans on some weapon systems, while others have argued that AI may be too diffuse to control. This paper argues that while a ban on all military applications of AI is likely infeasible, there may be specific cases where arms control is possible. Throughout history, the international community has attempted to ban or regulate weapons or military systems for a variety of reasons. This paper analyzes both successes and failures and offers several criteria that seem to influence why arms control works in some cases and not others. We argue that success or failure depends on the desirability (i.e., a weapon's military value versus its perceived horribleness) and feasibility (i.e., sociopolitical factors that influence its success) of arms control. Based on these criteria, and the historical record of past attempts at arms control, we analyze the potential for AI arms control in the future and offer recommendations for what policymakers can do today.
- Asia > Russia (0.69)
- Asia > Middle East > Syria (0.46)
- Oceania > Australia (0.28)
- (78 more...)
- Research Report (1.00)
- Overview (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Regional Government > Europe Government (1.00)
- Government > Military > Navy (1.00)
- Government > Military > Army (1.00)
Google bans deepfake-generating AI from Colab – TechCrunch
Google has banned the training of AI systems that can be used to generate deepfakes on its Google Colaboratory platform. The updated terms of use, spotted over the weekend by Unite.ai and BleepingComputer, include deepfake-related work in the list of disallowed projects. Colaboratory, or Colab for short, spun out from an internal Google Research project in late 2017. It's designed to allow anyone to write and execute arbitrary Python code through a web browser, particularly code for machine learning, education and data analysis. To that end, Google provides both free and paying Colab users access to hardware including GPUs and Google's custom-designed, AI-accelerating tensor processing units (TPUs).
FLI March 2022 Newsletter - Future of Life Institute
This Interesting Engineering piece highlights how even an AI built to find 'helpful drugs', when tweaked just a little, can find things that are rather less helpful. Collaborations Pharmaceuticals carried out a simple experiment to see what would happen if the AI they had built was slightly altered to look for chemical weapons, rather than medical treatments. According to a paper they published in Nature Machine Intelligence journal, the answer was not particularly reassuring. When reprogrammed to find chemical weapons, the machine learning algorithm found 40,000 possible options in just six hours. These researchers had 'spent decades using computers and A.I. to improve human health', yet they admitted, after the experiment, that they had been 'naive in thinking about the potential misuse of our trade'.
Fears Rise In Ukraine Of Use Of Chemical Weapons
The United States said Tuesday it has "credible information" that Russia may use "chemical agents" in its offensive to take the besieged Ukrainian city of Mariupol, reigniting concerns about the use of such prohibited weapons. While the West and Kyiv have been warning Moscow since the start of its invasion on February 24 against any use of chemical weapons, fears have grown this week after unconfirmed reports emerged that such weapons may have already been deployed. The Organization for the Prohibition of Chemical Weapons (OPCW) said Tuesday that it was "concerned" by allegations that chemical weapons had been used in Mariupol, a strategic port city besieged by Russian forces in the east of Ukraine and the scene of heavy fighting. The OPCW, to which both Russia and Ukraine belong, referred to "accusations leveled by both sides around possible misuse of toxic chemicals." The Ukrainian Azov battalion, which is engaged in the defense of Mariupol, said Monday that a Russian drone had dropped a "poisonous substance" on soldiers and civilians in Mariupol.
- Europe > Ukraine > Donetsk Oblast > Mariupol (0.98)
- North America > United States (0.73)
- Asia > Russia (0.47)
- (3 more...)